21 | Asymmetries in relative clause comprehension in three European sign languages
In: Glossa: a journal of general linguistics; Vol 6, No 1 (2021); 72; 2397-1835 (2021)
BASE

22 | Constituent order in Serbian Sign Language declarative clauses
In: Glossa: a journal of general linguistics; Vol 6, No 1 (2021); 39; 2397-1835 (2021)

24 | Database of adnominal possessive constructions in the Malayo-Polynesian languages of Southeast Asia ...

28 | Factors Behind the Effectiveness of an Unsupervised Neural Machine Translation System between Korean and Japanese
In: Applied Sciences; Volume 11; Issue 16 (2021)
Abstract: Korean and Japanese use different writing scripts but share the same Subject-Object-Verb (SOV) word order. In this study, we pre-train a language-generation model with the Masked Sequence-to-Sequence pre-training (MASS) method on Korean and Japanese monolingual corpora. When building the pre-trained generation model, we allow only a minimal shared vocabulary between the two languages. We then build an unsupervised Neural Machine Translation (NMT) system between Korean and Japanese on top of the pre-trained generation model. Despite the different writing scripts and the small shared vocabulary, the unsupervised NMT system performs well compared to other language pairs. Our interest is in the common characteristics of the two languages that make unsupervised NMT perform so well. We propose a new method that analyzes cross-attention between a source and a target language to estimate language differences from the perspective of machine translation. We compute cross-attention measurements for the Korean–Japanese and Korean–English pairs and compare their performance and characteristics. The Korean–Japanese pair differs little in word order and morphology, so unsupervised NMT between Korean and Japanese can be trained well even without parallel sentences or a shared vocabulary.
Keywords: language typology; MASS; pre-trained generation model; SOV word order; unsupervised neural machine translation; writing script
URL: https://doi.org/10.3390/app11167662

29 | Characterizing the Typical Information Curves of Diverse Languages
In: Entropy; Volume 23; Issue 10 (2021)

30 | Similar but different: investigating temporal constructions in sign language
In: Glossa: a journal of general linguistics; Vol 6, No 1 (2021); 2; 2397-1835 (2021)

31 | Conative calls to animals: From Arusa Maasai to a cross-linguistic prototype ...

36 | Inductive Bias and Modular Design for Sample-Efficient Neural Language Learning ...

37 | Word classes in language contact
In: The Oxford Handbook of Word Classes, in press (2021); https://halshs.archives-ouvertes.fr/halshs-03276022

39 | Universals of reference in discourse and grammar: Evidence from the Multi-CAST collection of spoken corpora